In [1]:
import graphlab
In [2]:
image_train = graphlab.SFrame('image_train_data/')
image_test = graphlab.SFrame('image_test_data/')
Sketch summaries are techniques for computing summary statistics of data very quickly. In GraphLab Create, SFrames and SArrays include a method:
.sketch_summary()
which computes such summary statistics. Using the training data, compute the sketch summary of the ‘label’ column and interpret the results. What’s the least common category in the training data?
In [3]:
image_train['label'].sketch_summary()
Out[3]:
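The sketch output lists each label together with its count; the least common category is the one with the smallest count. As a minimal sketch of how to read this off programmatically (assuming the Sketch object's frequent_items() method, which in GraphLab Create returns a dict mapping values to approximate counts):
label_sketch = image_train['label'].sketch_summary()
# frequent_items() maps each label to its (approximate) count
counts = label_sketch.frequent_items()
# the label with the smallest count is the least common category
least_common = min(counts, key=counts.get)
print(least_common)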
In most retrieval tasks, the data we have is unlabeled, so we call these unsupervised learning problems. However, we do have labels in this image dataset, and we will use them to create one model for each of the 4 image categories, {‘dog’, ‘cat’, ‘automobile’, ‘bird’}. To start, follow these steps:
Split the SFrame with the training data into 4 different SFrames. Each of these will contain data for 1 of the 4 categories above. Hint: if you use a logical filter to select the rows where the ‘label’ column equals ‘dog’, you can create an SFrame with only the data for images labeled ‘dog’.
As in the image retrieval notebook you downloaded, you are going to create a nearest neighbors model using 'deep_features' as the features, but this time create one such model for each category, using the training data. You can call the model trained on the ‘dog’ data the dog_model, the one trained on the ‘cat’ data the cat_model, and so on. You now have a nearest neighbors model that can find the nearest ‘dog’ to any image you give it (the dog_model), one that can find the nearest ‘cat’ (the cat_model), and so on.
Using these models, answer the following questions. The cat image below is the first image in the test data.
You can access this image, similarly to what we did in the iPython notebooks above, with this command:
image_test[0:1]
In [4]:
automobile = image_train.filter_by(['automobile'],'label')
cat = image_train.filter_by(['cat'],'label')
dog = image_train.filter_by(['dog'],'label')
bird = image_train.filter_by(['bird'],'label')
In [5]:
automobile_model = graphlab.nearest_neighbors.create(automobile, features=['deep_features'],
label='id')
In [6]:
cat_model = graphlab.nearest_neighbors.create(cat, features=['deep_features'],
label='id')
In [7]:
dog_model = graphlab.nearest_neighbors.create(dog, features=['deep_features'],
label='id')
In [8]:
bird_model = graphlab.nearest_neighbors.create(bird, features=['deep_features'],
label='id')
In [15]:
graphlab.canvas.set_target('ipynb')
image_test[0:1]
Out[15]:
In [16]:
image_test[0:1]['image'].show()
In [91]:
cat_model.query(image_test[0:1])
Out[91]:
In [92]:
def get_images_from_ids(query_result):
    # return the training images whose 'id' appears in the query result's 'reference_label' column
    return image_train.filter_by(query_result['reference_label'], 'id')
In [96]:
cat_image = image_train[image_train['id']==16289]
cat_image['image'].show()
In [93]:
get_images_from_ids(cat_model.query(image_test[0:1]))['image'].show()
In [97]:
dog_model.query(image_test[0:1])
Out[97]:
In [98]:
dog_image = image_train[image_train['id']==16976]
dog_image['image'].show()
In [94]:
get_images_from_ids(dog_model.query(image_test[0:1]))['image'].show()
When we queried the nearest neighbors models above, the ‘distance’ column in the output tables showed the computed distance between the input and each of the retrieved neighbors. In this question, you will use these distances to perform a classification task, using the idea of a nearest-neighbors classifier.
For the first image in the test data (image_test[0:1]), which we used above, compute the mean distance between this image and its 5 nearest neighbors that were labeled ‘cat’ in the training data (similarly to what you did in the previous question). Save this result.
Similarly, compute the mean distance between this image and its 5 nearest neighbors that were labeled ‘dog’ in the training data. Save this result.
On average, is the first image in the test data closer to its 5 nearest neighbors in the ‘cat’ data or in the ‘dog’ data? (In a later course, we will see that this is an example of what is called a k-nearest neighbors classifier, where we use the label of neighboring points to predict the label of a test point.)
In [21]:
cat_model.query(image_test[0:1])['distance'].mean()
Out[21]:
In [22]:
dog_model.query(image_test[0:1])['distance'].mean()
Out[22]:
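Since the question asks you to save these results, a minimal sketch that stores both mean distances in variables and compares them directly (reusing the cat_model and dog_model from above; query() defaults to k=5 neighbors) could look like:
# mean distance from the first test image to its 5 nearest 'cat' and 'dog' neighbors
cat_mean_dist = cat_model.query(image_test[0:1])['distance'].mean()
dog_mean_dist = dog_model.query(image_test[0:1])['distance'].mean()
# the smaller mean distance tells us which class the test image is closer to on average
if cat_mean_dist < dog_mean_dist:
    print('on average, closer to the cat neighbors')
else:
    print('on average, closer to the dog neighbors')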
A nearest neighbor classifier predicts the label of a point as the most common label of its nearest neighbors. In this question, we will measure the accuracy of a 1-nearest-neighbor classifier, i.e., predict the output as the label of the nearest neighbor in the training data. Although there are simpler ways of computing this result, we will go step-by-step here to introduce you to more concepts in nearest neighbors and SFrames, which will be useful later in this Specialization.
Training models: For this question, you will need the nearest neighbors models you learned above on the training data, i.e., the dog_model, cat_model, automobile_model and bird_model.
Splitting test data by label: Above, you split the training data SFrame into one SFrame for images labeled ‘dog’, another for those labeled ‘cat’, etc. Now, do the same for the test data. You can call the resulting SFrames
image_test_cat, image_test_dog, image_test_bird, image_test_automobile
In [23]:
image_test_automobile = image_test.filter_by(['automobile'],'label')
image_test_cat = image_test.filter_by(['cat'],'label')
image_test_dog = image_test.filter_by(['dog'],'label')
image_test_bird = image_test.filter_by(['bird'],'label')
So far, you have queried our nearest neighbors models (e.g., dog_model.query()) with a single image as the input, but you can actually query with a whole set of data, and the model will find the nearest neighbors for each data point. Note that the input index will be stored in the ‘query_label’ column of the output SFrame.
Using this knowledge, find the closest neighbor to the dog test data using each of the trained models, e.g.,
dog_cat_neighbors = cat_model.query(image_test_dog, k=1)
finds 1 neighbor (that’s what k=1 does) to the dog test images (image_test_dog) in the cat portion of the training data (used to train the cat_model).
Now, do this for every combination of the labels in the training and test data, and collect the resulting distances into a single SFrame called dog_distances with one column per combination:
i. dog_distances[‘dog-dog’], storing dog_dog_neighbors[‘distance’]
ii. dog_distances[‘dog-cat’], storing dog_cat_neighbors[‘distance’]
iii. dog_distances[‘dog-automobile’], storing dog_automobile_neighbors[‘distance’]
iv. dog_distances[‘dog-bird’], storing dog_bird_neighbors[‘distance’]
In [24]:
dog_cat_neighbors = cat_model.query(image_test_dog, k=1)
In [27]:
dog_dog_neighbors = dog_model.query(image_test_dog, k=1)
In [28]:
dog_automobile_neighbors = automobile_model.query(image_test_dog, k=1)
In [29]:
dog_bird_neighbors = bird_model.query(image_test_dog, k=1)
Hint: You can create a new SFrame from the columns of other SFrames by creating a dictionary with the new columns, as shown in this example:
new_sframe = graphlab.SFrame({'foo': other_sframe['foo'], 'bar': some_other_sframe['bar']})
In [33]:
dog_distances = graphlab.SFrame({'dog_automobile': dog_automobile_neighbors['distance'],
'dog_bird': dog_bird_neighbors['distance'],
'dog_cat': dog_cat_neighbors['distance'],
'dog_dog': dog_dog_neighbors['distance']
})
In [34]:
dog_distances.head()
Out[34]:
Next, you will use the method
.apply()
on this SFrame to iterate line by line and compute the number of ‘dog’ test examples where the distance to the nearest ‘dog’ was lower than that to the other classes. You will do this in three steps:
i. Consider one row of the SFrame dog_distances. Let’s call this variable row. You can access each distance by calling, for example,
row[‘dog_cat’]
which, in the example table above, has a value of 36.4196077068 for the first row.
Create a function starting with
def is_dog_correct(row):
which returns 1 if the value of row[‘dog_dog’] is lower than that of the other columns, and 0 otherwise. That is, it returns 1 if this row is correctly classified by 1-nearest neighbors, and 0 otherwise.
In [71]:
def is_dog_correct(row):
    # min(row.values()) includes row['dog_dog'] itself, so this returns 1 exactly
    # when the nearest 'dog' is at least as close as the nearest of every other class
    if row['dog_dog'] <= min(row.values()):
        return 1
    else:
        return 0

# equivalent one-liner:
# dog_distances.apply(lambda row: 1 if row['dog_dog'] <= min(row.values()) else 0)
ii. Using the function is_dog_correct(row), you can check whether 1 row is correctly classified. Now, you want to count how many rows are correctly classified. You could do a for loop iterating through each row and applying the function is_dog_correct(row), but this would be really slow, because the SFrame is not optimized for this type of row-by-row operation.
Instead, we will use the .apply() method to apply the function is_dog_correct to each row of the SFrame (a loop-based sketch is shown after the .sum() result below, for comparison).
iii. Computing the number of correct predictions for ‘dog’: You can now call:
dog_distances.apply(is_dog_correct)
which will return an SArray (a column of data) with a 1 for every correct row and a 0 for every incorrect one.
In [73]:
dog_distances.apply(is_dog_correct)
Out[73]:
You can call:
.sum()
on the result to get the total number of correctly classified ‘dog’ images in the test set!
In [74]:
dog_distances.apply(is_dog_correct).sum()
Out[74]:
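For comparison, the explicit row-by-row loop mentioned in step ii, which .apply() replaces, would look roughly like the sketch below (iterating a GraphLab SFrame yields one dictionary per row); it produces the same count but is much slower on large SFrames:
# slow, loop-based equivalent of dog_distances.apply(is_dog_correct).sum()
correct = 0
for row in dog_distances:
    correct += is_dog_correct(row)
print(correct)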
Hint: To make sure your code is working correctly, if you were to do steps d) and e) in this question to count the number of correctly classified ‘cat’ images in the test data, instead of ‘dog’, the result would be 548.
In [77]:
cat_distances = graphlab.SFrame({'cat_automobile': automobile_model.query(image_test_cat, k=1)['distance'],
'cat_bird': bird_model.query(image_test_cat, k=1)['distance'],
'cat_cat': cat_model.query(image_test_cat, k=1)['distance'],
'cat_dog': dog_model.query(image_test_cat, k=1)['distance'],
})
In [78]:
cat_distances.head()
Out[78]:
In [81]:
def is_cat_correct(row):
    # returns 1 when the nearest 'cat' is at least as close as the nearest of every other class
    if row['cat_cat'] <= min(row.values()):
        return 1
    else:
        return 0
In [82]:
cat_distances.apply(is_cat_correct).sum()
Out[82]:
In [87]:
dog_distances.apply(is_dog_correct).sum()/float(len(dog_distances))
Out[87]:
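The same accuracy computation carries over to the other categories; for example, a sketch for the ‘cat’ test images, reusing the cat_distances SFrame and is_cat_correct function defined above, would be:
# fraction of 'cat' test images correctly classified by 1-nearest neighbors
cat_accuracy = cat_distances.apply(is_cat_correct).sum() / float(len(cat_distances))
print(cat_accuracy)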